In this paper, we aim to address the large domain gap between high-resolution face images, e.g., from professional portrait photography, and low-quality surveillance images, e.g., from security cameras. Establishing an identity match between such disparate sources is a classical surveillance face identification scenario, which continues to be a challenging problem for modern face recognition techniques. To that end, we propose a method that combines face super-resolution, resolution matching, and multi-scale template accumulation to reliably recognize faces from long-range surveillance footage, including low-quality sources. The proposed approach does not require training or fine-tuning on the target dataset of real surveillance images. Extensive experiments show that our proposed method is able to outperform even existing methods fine-tuned on the SCFace dataset.
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data, which have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022) held at the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of 3 participating teams from academia. Eventually, six valid submissions were received and then evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. Participants were free to use any training data, and the test data was built by the organizers by synthetically occluding parts of face images from a well-known dataset. The submitted solutions presented innovations and performed well beyond the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with a well-defined evaluation protocol.
In this paper, we propose a neural end-to-end system for voice-preserving, lip-synchronous translation of videos. The system is designed to combine multiple component models and produce a video of the original speaker speaking in the target language, while remaining faithful to the original speaker's voice, speech characteristics, and face. The pipeline starts with automatic speech recognition including emphasis detection, followed by a translation model. The translated text is then synthesized by a text-to-speech model that recreates the original emphases mapped onto the translated sentence. The resulting synthetic speech is then mapped to the original speaker's voice using a voice conversion model. Finally, to synchronize the speaker's lips with the translated audio, a conditional generative adversarial network based model generates frames of adapted lip movements with respect to the input face image as well as the output of the voice conversion model. In the last step, the system combines the generated video with the converted audio to produce the final output. The result is a video of a speaker speaking in another language without ever actually having known it. To evaluate our design, we present a user study of the complete system as well as separate evaluations of the individual components. Since no dataset is available to evaluate our whole system, we collected a test set and evaluated our system on it. The results show that our system is able to generate convincing videos of the original speaker speaking the target language while preserving the original speaker's characteristics. The collected dataset will be shared.
Detecting fights from still images shared on social media is an important task to limit the distribution of violent scenes and prevent their negative effects. Motivated by this, in this study we address the problem of fight detection from still images collected from the web and social media. We explore to what extent a fight can be detected from a single still image. We also propose a new dataset, named Social Media Fight Images (SMFI), comprising real-world images of fight actions. Extensive experimental results on the proposed dataset show that fight actions can be recognized successfully from still images. That is, even without exploiting temporal information, fights can be detected with high accuracy by utilizing appearance only. We also perform cross-dataset experiments to evaluate the representation capacity of the collected dataset. These experiments indicate that, as in other computer vision problems, a dataset bias exists for the fight recognition problem. Although methods achieve close to 100% accuracy when trained and tested on the same fight dataset, cross-dataset accuracies drop significantly, to around 70% when more representative datasets are used for training. The SMFI dataset is found to be one of the two most representative of the five fight datasets used.
Health organizations advise social distancing, wearing face masks, and avoiding touching faces to prevent the spread of the coronavirus. Based on these protective measures, we developed a computer vision system to help prevent the transmission of COVID-19. Specifically, the developed system performs face mask detection, face-hand interaction detection, and measures social distance. To train and evaluate the developed system, we collected and annotated images that represent face mask usage and face-hand interaction in the real world. Besides assessing the performance of the developed system on our own datasets, we also tested it on existing datasets in the literature without performing any adaptation on them. In addition, we propose a module to track social distance between people. Experimental results indicate that our datasets represent real-world diversity well. The proposed system achieved high performance and generalization capacity on unseen data in real-world scenarios for face mask usage detection, face-hand interaction detection, and measuring social distance. The datasets will be available at https://github.com/ilemeyiokur/covid-19-preventions-control-system.
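The social-distance tracking module described above reduces, at its core, to checking pairwise distances between detected people against a threshold. A minimal sketch of that check, assuming person positions have already been projected to ground-plane coordinates in metres (the function name, threshold, and coordinate convention are illustrative assumptions, not the paper's implementation):

```python
import itertools
import math

def distance_violations(positions, min_dist=1.5):
    """Return index pairs of people standing closer than min_dist.

    positions: list of (x, y) ground-plane coordinates in metres.
    """
    violations = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if math.dist(p, q) < min_dist:  # Euclidean distance between two people
            violations.append((i, j))
    return violations

# Two people 1 m apart violate a 1.5 m threshold; the third is far away.
print(distance_violations([(0, 0), (1, 0), (5, 5)]))  # [(0, 1)]
```

In a full system the positions would come from a person detector plus a camera-to-ground homography; the check itself stays this simple.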
This report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of scientific advances, available products, and research projects. Open challenges are also highlighted.
Anomalies are ubiquitous in all scientific fields and can express an unexpected event, due either to incomplete knowledge about the data distribution or to an unknown process that suddenly comes into play and distorts observations. Due to the rarity of such events, to train deep learning models on the anomaly detection (AD) task, scientists rely only on "normal" data, i.e., non-anomalous samples, thus letting the neural network infer the distribution underlying the input data. In this context, we propose a novel framework, named Multi-layer One-Class Classification (MOCCA), to train and test deep learning models on the AD task. Specifically, we apply it to autoencoders. A key novelty in our work stems from the explicit optimization of the intermediate representations for the AD task. Indeed, unlike commonly used approaches that consider a neural network as a single computational block, i.e., using only the output of the last layer, MOCCA explicitly leverages the multi-layer structure of deep architectures. Each layer's feature space is optimized for AD during training, while in the test phase the deep representations extracted from the trained layers are combined to detect anomalies. With MOCCA, we split the training process into two steps. First, the autoencoder is trained on the reconstruction task only. Then, we keep only the encoder, trained to minimize the L2 distance between the output representation and a reference point, the anomaly-free training data centroid, at each considered layer. Subsequently, we combine the deep features extracted at the various trained layers of the encoder model to detect anomalies at inference time. To assess the performance of models trained with MOCCA, we conduct extensive experiments on publicly available datasets. We show that our proposed method reaches comparable or superior performance to state-of-the-art approaches available in the literature.
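The multi-layer scoring idea above, per-layer centroids of anomaly-free features combined into one anomaly score, can be sketched in a few lines. This is a minimal NumPy illustration; the function names and the simple sum-of-squared-distances fusion are assumptions for clarity, not the paper's exact formulation:

```python
import numpy as np

def layer_centroids(feature_sets):
    """Per-layer centroid of anomaly-free training features.

    feature_sets: list over layers, each an (N, d_l) array of
    encoder activations for N normal training samples.
    """
    return [feats.mean(axis=0) for feats in feature_sets]

def anomaly_score(sample_feats, centroids):
    """Multi-layer anomaly score for one sample: the sum over layers
    of squared L2 distances to each layer's reference centroid."""
    return sum(((f - c) ** 2).sum() for f, c in zip(sample_feats, centroids))
```

A sample whose activations sit near every layer's centroid scores low; a sample far from the centroids at any layer scores high, which is what makes combining layers more informative than using the last layer alone.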
Object detectors are conventionally trained by a weighted sum of classification and localization losses. Recent studies (e.g., predicting IoU with an auxiliary head, Generalized Focal Loss, Rank & Sort Loss) have shown that forcing these two loss terms to interact with each other in non-conventional ways creates a useful inductive bias and improves performance. Inspired by these works, we focus on the correlation between classification and localization and make two main contributions: (i) We provide an analysis about the effects of correlation between classification and localization tasks in object detectors. We identify why correlation affects the performance of various NMS-based and NMS-free detectors, and we devise measures to evaluate the effect of correlation and use them to analyze common detectors. (ii) Motivated by our observations, e.g., that NMS-free detectors can also benefit from correlation, we propose Correlation Loss, a novel plug-in loss function that improves the performance of various object detectors by directly optimizing correlation coefficients: E.g., Correlation Loss on Sparse R-CNN, an NMS-free method, yields 1.6 AP gain on COCO and 1.8 AP gain on Cityscapes dataset. Our best model on Sparse R-CNN reaches 51.0 AP without test-time augmentation on COCO test-dev, reaching state-of-the-art. Code is available at https://github.com/fehmikahraman/CorrLoss
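The core idea of directly optimizing a correlation coefficient between the two detection heads can be illustrated with a Pearson-based loss. This is a hedged sketch, assuming Pearson correlation between classification scores and IoUs of positive anchors; the paper's actual loss may use a different correlation coefficient (e.g., Spearman) and operate on detector-specific targets:

```python
import numpy as np

def correlation_loss(cls_scores, ious, eps=1e-8):
    """1 - Pearson correlation between classification scores and
    localization quality (IoU). Minimizing it encourages the two
    heads to rank boxes consistently."""
    s = np.asarray(cls_scores, dtype=float)
    q = np.asarray(ious, dtype=float)
    s_c = s - s.mean()  # center both variables
    q_c = q - q.mean()
    corr = (s_c * q_c).sum() / (
        np.sqrt((s_c ** 2).sum() * (q_c ** 2).sum()) + eps
    )
    return 1.0 - corr  # 0 when perfectly correlated, 2 when anti-correlated

# Scores that rank boxes exactly like their IoUs give near-zero loss.
print(correlation_loss([0.1, 0.5, 0.9], [0.2, 0.6, 1.0]))
```

In practice such a term would be added, with a weight, to the usual classification and localization losses and computed per image over positive samples; a differentiable framework (e.g., PyTorch) would replace NumPy so gradients flow to both heads.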
Development of guidance, navigation and control frameworks/algorithms for swarms has attracted significant attention in recent years. That being said, planning swarm allocations/trajectories for engaging with enemy swarms remains a largely understudied problem. Although small-scale scenarios can be addressed with tools from differential game theory, existing approaches fail to scale for large-scale multi-agent pursuit-evasion (PE) scenarios. In this work, we propose a reinforcement learning (RL) based framework to decompose large-scale swarm engagement problems into a number of independent multi-agent pursuit-evasion games. We simulate a variety of multi-agent PE scenarios, where finite-time capture is guaranteed under certain conditions. The calculated PE statistics are provided as a reward signal to the high-level allocation layer, which uses an RL algorithm to allocate controlled swarm units to eliminate enemy swarm units with maximum efficiency. We verify our approach in large-scale swarm-to-swarm engagement simulations.